
    Verifying Web Applications: From Business Level Specifications to Automated Model-Based Testing

    One of the reasons preventing a wider uptake of model-based testing in industry is the difficulty developers encounter when trying to think in terms of properties rather than linear specifications. A disparity has traditionally been perceived between the language spoken by the customers who specify a system and the language required to construct models of that system. The dynamic nature of the specifications for commercial systems further aggravates this problem, since models need to be rechecked after every specification change. In this paper, we propose an approach for converting specifications written in the commonly used quasi-natural language Gherkin into models for use with a model-based testing tool. We have instantiated this approach using QuickCheck and demonstrate its applicability via a case study on the eHealth system, the national health portal for Maltese residents. Comment: In Proceedings MBT 2014, arXiv:1403.704
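The conversion the abstract describes can be pictured with a minimal sketch: a Gherkin Then-step re-expressed as a boolean property and checked QuickCheck-style against many random inputs. The scenario text, the `top_up` operation, and the QuickCheck-like driver below are all invented for illustration; the paper's actual tool targets QuickCheck itself.

```python
import random

# Hypothetical Gherkin scenario; the step wording is invented for illustration.
SCENARIO = """\
Scenario: Top-up increases account balance
  Given an account with any starting balance
  When the user tops up by any positive amount
  Then the balance equals the starting balance plus the amount
"""

def top_up(balance, amount):
    """Stand-in for a web-system operation under test."""
    return balance + amount

def property_from_scenario(balance, amount):
    """The Then-step expressed as a boolean property over arbitrary inputs."""
    return top_up(balance, amount) == balance + amount

def quickcheck(prop, trials=100):
    """Minimal QuickCheck-style driver: random inputs, report first failure."""
    for _ in range(trials):
        balance = random.randint(0, 10_000)
        amount = random.randint(1, 500)
        if not prop(balance, amount):
            return (False, (balance, amount))
    return (True, None)

ok, counterexample = quickcheck(property_from_scenario)
print("property holds" if ok else f"failed on {counterexample}")
```

The point of the sketch is the shift in mindset the abstract mentions: a linear scenario becomes a property quantified over all inputs, so the model survives specification changes that merely alter concrete values.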

    Spatio-temporal variation in the structure of a deep water Posidonia oceanica meadow assessed using non-destructive techniques

    The Malta-Comino Channel (Maltese islands, central Mediterranean) supports extensive meadows of the seagrass Posidonia oceanica that in some places extend to a depth of around 43 m, which is rare for this seagrass. To assess spatial and temporal variation in the state of the deeper parts of the P. oceanica meadow, data on the structural characteristics of the meadow at its lower bathymetric limit were collected during the summers of 2001, 2003 and 2004 from four stations (two within each of two sites) located at a similar depth, over a spatial extent of 500 m. Shoot density was estimated in situ, while data on plant architecture (number of leaves, mean leaf length, and epiphyte load) were obtained using an underwater photographic technique specifically designed to avoid destructive sampling of the seagrass. Results indicated that P. oceanica shoot density was lower than that recorded from the same meadow during a study undertaken in 1995; the observed decrease was attributed to the activities of an offshore aquaculture farm that operated in the vicinity of the meadow during the period 1995–2000. ANOVA indicated significant spatial and temporal variation in meadow structural attributes at both sites during the 3-year study; for example, shoot density increased overall with time at site A, an indication of potential recovery of the meadow following cessation of the aquaculture operations. The lower shoot density recorded at site B (compared with site A) was attributed to higher epiphyte loads on the seagrass relative to those at site A. The findings, which include new data on the structural characteristics of P. oceanica occurring at depths >40 m, are discussed with reference to the use of the non-destructive photographic technique to monitor the state of health of deep water seagrass meadows.

    An event-driven language for cartographic modelling of knowledge in software development organisations

    With software engineering now considered a fully-fledged knowledge industry, in which the most valuable asset of an organisation is the knowledge held by its employees [BD08], high staff turnover rates are becoming increasingly worrying. If software engineering organisations are to maintain their competitive edge, they need to ensure that their intellectual capital continues to grow and is not lost as people move in and out of their employ. In this paper, the authors present work involving the formalisation of a language that enables organisations to create and analyse maps of their organisational knowledge. In a more elaborate version of the traditional yellow-pages approach utilised in the cartographic school of thought, the proposed language models various relationships between knowledge assets, uses an event-driven mechanism to determine who knows what within the organisation, and provides metrics for detecting three types of risk related to knowledge management in modern software engineering. A three-month evaluation of the language is also outlined and its results discussed.
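The event-driven mechanism and the risk metrics can be sketched concretely. The event names, the "single holder" metric, and the asset names below are illustrative stand-ins, not the paper's actual language or its three risk types.

```python
from collections import defaultdict

# Knowledge asset -> set of employees who currently know it.
knows = defaultdict(set)

def apply_event(event):
    """Update the knowledge map from a development event (hypothetical kinds)."""
    kind, person, asset = event
    if kind == "worked_on":        # working on an asset implies knowing it
        knows[asset].add(person)
    elif kind == "left_company":   # departures erode the map
        for holders in knows.values():
            holders.discard(person)

def single_holder_risks():
    """Assets whose knowledge rests with exactly one employee."""
    return [a for a, holders in knows.items() if len(holders) == 1]

events = [
    ("worked_on", "alice", "billing-module"),
    ("worked_on", "bob", "billing-module"),
    ("worked_on", "bob", "auth-service"),
    ("left_company", "alice", None),
]
for e in events:
    apply_event(e)
print(single_holder_risks())  # after alice leaves, both assets rest with bob alone
```

The metric turns the vague worry about staff turnover into something queryable: after one departure event, the map immediately exposes which assets have become single points of failure.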

    A Case Study on Graphically Modelling and Detecting Knowledge Mobility Risks

    As the world comes to depend increasingly on a knowledge economy, companies are realising that their most valuable asset is the knowledge held by their employees. This asset is hard to track, manage and retain, especially where employees are free to job-hop for better pay after providing a few weeks' notice to their employers. In previous work we defined the concept of knowledge risk and presented a graph-based approach for detecting it. In this paper, we present the results of a case study which employs knowledge graphs in the context of four software development teams.

    Extracting monitors from JUnit tests

    A large portion of the software development industry relies on testing as the main technique for quality assurance, while other techniques which can provide extra guarantees are largely ignored. A case in point is runtime verification, which provides assurance that a system’s execution flow is correct at runtime. Compared to testing, this technique has the advantage of checking the actual runs of a system rather than a number of representative test cases. Based on experience with the local industry, one of the main reasons for this lack of uptake of runtime verification is the extra effort required to formally specify the correctness criteria to be checked at runtime — runtime verifiers are typically synthesised from formal specifications. One potential approach to counteract this issue would be to use the information available in tests to automatically obtain monitors. The plausibility of this approach rests on the similarity between tests and runtime verifiers: tests drive the system under test and check that the outcome is correct, while runtime verifiers let the system users drive the system under observation but still have to check that the outcome is as expected.
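The test/monitor similarity the abstract describes can be sketched in a few lines: the oracle inside a unit test is reused unchanged as a runtime monitor that observes every real invocation. The `withdraw` operation and `oracle` predicate are invented for illustration; the paper works with JUnit tests rather than Python.

```python
import functools

def withdraw(balance, amount):
    """System operation under observation."""
    return balance - amount

def oracle(balance, amount, result):
    """The check a unit test would make on withdraw's outcome."""
    return result == balance - amount

# A test drives the system with one representative input and applies the oracle:
assert oracle(100, 30, withdraw(100, 30))

# A monitor reuses the same oracle, but checks *every* run driven by real users:
def monitored(fn, check):
    @functools.wraps(fn)
    def wrapper(balance, amount):
        result = fn(balance, amount)
        if not check(balance, amount, result):
            raise RuntimeError(f"monitor violation: {balance=} {amount=} {result=}")
        return result
    return wrapper

withdraw = monitored(withdraw, oracle)
print(withdraw(50, 20))  # 30, now checked at runtime rather than only under test
```

The difference between the two uses is exactly the one the abstract draws: the test supplies both the inputs and the check, while the monitor keeps only the check and lets real executions supply the inputs.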

    Integrating mutation testing into agile processes through equivalent mutant reduction via differential symbolic execution

    In agile programming, software development is performed in iterations. To ensure the changes are correct, considerable effort is spent writing comprehensive unit tests. Unit tests are the most basic form of testing and are performed on the smallest testable units of code. These unit tests have multiple purposes, the main one being that of acting as a safety net between product releases. However, the value of testing can be called into question if there is no measure of the quality of unit tests. Code coverage analysis is an automated technique which shows which statements are covered by tests. However, high code coverage might still not be good enough, as whole branches or paths could go completely untested, which in turn leads to a false sense of security. Mutation Testing is a technique designed to successfully and realistically identify whether a test suite is satisfactory; in turn, such tests lead to finding bugs within the code. The technique behind mutation testing involves generating variants of a system (called mutants) by modifying its operators and executing tests against them. If the test suite is thorough enough, at least one test should fail against every mutant, thus rendering that mutant killed. Unkilled mutants would require investigation and potential modification of the test suite.

    Making mutation testing a more feasible proposition for the industry

    Software engineering firms find themselves developing systems for customers whose need to compete often leads to situations whereby requirements are vague and/or prone to change. One of the prevalent ways in which the industry deals with this situation is through the adoption of so-called Agile development processes. Such processes enable the evolutionary delivery of software systems in small increments, frequent customer feedback, and, ultimately, software which continuously adapts to changing requirements. In this fluid scenario, developers rely on automated unit tests to gain confidence that any regressions resulting from code changes will be detected. Consequently, trust in the software system can only follow from the quality of the tests. Unfortunately, the industry tends to rely on tools that calculate primitive measures such as statement coverage, a measure which has been shown to provide a false sense of security.

    Using symbolic execution for equivalent mutant detection

    Mutation Testing is a fault injection technique used to measure test adequacy by generating defects (mutations) in a program and checking whether its test suite is able to detect each change. However, the technique suffers from the Equivalent Mutant Problem. Equivalent mutants are mutants which, although syntactically different from the original program, remain semantically equivalent to it. A fully automated solution which decides equivalence is impossible, as equivalence of non-trivial programs is undecidable; in practice, this means that human effort is required to decide equivalence. Equivalent mutants are the barrier keeping Mutation Testing from being widely adopted. Moreover, in a study by Irvine et al., the average time taken for each manual mutant classification was fifteen minutes.
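A concrete equivalent mutant makes the problem tangible. The pair below is invented for illustration: for Python integers, `x << 1` computes the same value as `x * 2` for every input, so no test can ever kill the mutant. Deciding such equivalence in general requires symbolic reasoning (the paper's approach uses symbolic execution); the exhaustive check over a finite input slice below is only a cheap approximation, not a decision procedure.

```python
# An equivalent mutant: syntactically different, semantically identical.
def original(x):
    return x * 2

def mutant(x):
    return x << 1  # for Python ints, x << 1 == x * 2 for every x

def appear_equivalent(f, g, domain):
    """Approximate equivalence by comparing outputs over a finite domain.
    A disagreement proves non-equivalence; agreement proves nothing in general."""
    return all(f(x) == g(x) for x in domain)

print(appear_equivalent(original, mutant, range(-1000, 1000)))  # True
```

This asymmetry is why the undecidability result bites: a surviving mutant may be equivalent (harmless) or may expose a genuine gap in the test suite, and only deeper analysis, or the fifteen minutes of human effort the abstract cites, can tell the two apart.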

    Search based software engineering

    Consider the following questions, which are posed by software engineers on a daily basis: 1. What is the smallest set of test cases that will cover all statements in this program? 2. What is the best way to organise classes and methods for this OO design? 3. What is the set of requirements that balances software development cost and customer satisfaction? Whilst these questions seem to address different problems, they do have some notable commonalities. Firstly, they form part of a large set of software engineering problems which can each be solved by a multitude of potential solutions; that is to say, if one were to ask the above questions of x equally competent engineers, one would likely get back x different yet correct solutions. Secondly, this class of problems is usually tasked with balancing a number of competing constraints; a typical example here is maximising customer satisfaction whilst keeping development costs low. Finally, whilst there is typically no perfect answer (and indeed no precise rules for computing the best solution), good solutions can be recognised. When problems with similar characteristics were encountered in disciplines other than software engineering, they were solved with a large degree of success using search-based techniques. It was this realisation that gave rise to the field of search based software engineering.